This work proposes a new kinematics for a prosthetic hand driven by a single actuator that allows both a tripod grip and a lateral grip. Inspired by tridigital prostheses, which are simpler, more robust and cheaper than polydigital ones, this new kinematics aims at providing accessible prostheses (affordable, easy to use, robust, easy to repair). Cables are used instead of rigid links to transmit the motion of the fingers and the thumb. The paper details the method and the design choices. Finally, an evaluation of the prototype by test users leads to a first discussion of the results.
Cuspidal robots are robots with at least two inverse kinematic solutions that can be connected by a singularity-free path. The cuspidality of generic 3R robots has been studied in the past, but extending this study to six-degree-of-freedom robots can be a challenging problem. Many robots can be modelled as polynomial maps together with a real algebraic set, so that the notion of cuspidality can be extended to this data. In this paper, we design an algorithm which, given as input a polynomial map in $n$ indeterminates and $s$ polynomials in the same indeterminates describing a real algebraic set of dimension $d$, decides the cuspidality of the restriction of the map to the real algebraic set under consideration. Moreover, if $D$ and $\tau$ denote, respectively, the maximum degree and a bound on the size of the coefficients of the input polynomials, this algorithm runs in time polynomial in $\tau$ and $((s+D)D)^{O(n^2)}$. It relies on many high-level algorithms in computer algebra, which use advanced methods on real algebraic sets and critical loci of polynomial maps. To the best of our knowledge, this is the first algorithm that tackles the cuspidality problem from a general perspective.
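To make the geometric notion concrete, here is a minimal numerical sketch, under stated assumptions (a planar 3R arm with hypothetical link lengths and a position-plus-orientation task), of one ingredient of cuspidality analysis: checking that a joint-space path between two inverse kinematic solutions never crosses a singularity, i.e. that det(J) never vanishes along it. It does not reproduce the certified symbolic algorithm described above.

```python
import numpy as np

L = np.array([1.0, 0.8, 0.5])   # hypothetical link lengths

def fk(q):
    """Planar 3R forward kinematics: (q1, q2, q3) -> (x, y, phi)."""
    angles = np.cumsum(q)
    x = float(np.sum(L * np.cos(angles)))
    y = float(np.sum(L * np.sin(angles)))
    return np.array([x, y, angles[-1]])

def det_jacobian(q, eps=1e-6):
    """Determinant of the 3x3 Jacobian, estimated by central differences."""
    J = np.zeros((3, 3))
    for i in range(3):
        dq = np.zeros(3); dq[i] = eps
        J[:, i] = (fk(q + dq) - fk(q - dq)) / (2 * eps)
    return np.linalg.det(J)

def path_is_singularity_free(q_start, q_end, steps=500, tol=1e-6):
    """True if det(J) keeps a constant sign and stays away from zero along
    the straight joint-space segment from q_start to q_end."""
    dets = [det_jacobian((1 - t) * q_start + t * q_end)
            for t in np.linspace(0.0, 1.0, steps)]
    return min(np.abs(dets)) > tol and (np.sign(dets[0]) == np.sign(dets)).all()
```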
Musculoskeletal disorders (MSDs) are one of the major health problems in physical work, especially in manual handling jobs. In the literature, muscle fatigue is regarded as closely related to MSDs, especially for muscle-related disorders. In addition to the many existing analytical techniques for muscle fatigue assessment and MSD risk analysis, a new muscle fatigue model is proposed. The newly proposed model reflects the influence of the external load, the workload history, and individual differences. The model is mathematically simple and can easily be applied in real-time computation, for example in real-time virtual work simulation and evaluation. It is validated mathematically against 24 existing static models by comparing the computed maximum endurance times (METs), and qualitatively or quantitatively against 3 existing dynamic models. The proposed model shows high or moderate similarity to all 24 static models in predicting MET, and the validation results against the three dynamic models are also promising. The main limitation of the model is that it still lacks experimental validation in more dynamic situations. Relevance to industry: muscle fatigue is one of the main causes of work-related MSDs, especially in physical work. A correct assessment of muscle fatigue is necessary to determine work-rest regimens and to reduce the risk of MSDs.
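As an illustration of what a "mathematically simple, real-time" fatigue model can look like, here is a hedged sketch in which the remaining force capacity decays at a rate proportional to the relative external load; the specific equation, the parameter k, and the function names are illustrative assumptions, not the equation proposed in the abstract.

```python
def maximum_endurance_time(f_load, mvc, k=1.0, dt=0.01, t_max=1000.0):
    """Integrate a simple fatigue ODE: the remaining force capacity decays at a
    rate proportional to the relative external load (hypothetical model form:
    dCap/dt = -k * (f_load / mvc) * Cap). Returns the maximum endurance time
    (MET), i.e. the time at which the capacity drops below the required load,
    expressed in the time units of 1/k."""
    cap, t = mvc, 0.0
    while cap > f_load and t < t_max:
        cap += -k * (f_load / mvc) * cap * dt
        t += dt
    return t

# Example: sustained contraction at 30% of the maximum voluntary contraction (MVC).
met = maximum_endurance_time(f_load=30.0, mvc=100.0, k=1.0)
print(f"Predicted MET at 30% MVC: {met:.2f} (in 1/k time units)")
```

Because the integration only depends on the current capacity and the current load, the same update can be run step by step inside a real-time simulation, which is the property the abstract emphasises.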
Classical reinforcement learning (RL) techniques are generally concerned with the design of decision-making policies driven by the maximisation of the expected outcome. Nevertheless, this approach does not take into consideration the potential risk associated with the actions taken, which may be critical in certain applications. To address that issue, the present research work introduces a novel methodology based on distributional RL to derive sequential decision-making policies that are sensitive to the risk, the latter being modelled by the tail of the return probability distribution. The core idea is to replace the $Q$ function generally standing at the core of learning schemes in RL by another function taking into account both the expected return and the risk. Named the risk-based utility function $U$, it can be extracted from the random return distribution $Z$ naturally learnt by any distributional RL algorithm. This makes it possible to span the complete potential trade-off between risk minimisation and expected return maximisation, in contrast to fully risk-averse methodologies. Fundamentally, this research yields a truly practical and accessible solution for learning risk-sensitive policies with minimal modification to the distributional RL algorithm, and with an emphasis on the interpretability of the resulting decision-making process.
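A minimal sketch of the idea, assuming the random return $Z(s,a)$ is represented by quantile estimates (as in quantile-based distributional RL): the expected return is blended with a lower-tail statistic (a CVaR-style estimate) to form a utility used for action selection. The blending rule, the parameter names, and the CVaR choice are assumptions for illustration, not the paper's exact definition of $U$.

```python
import numpy as np

def risk_based_utility(quantiles, alpha=0.1, lam=0.5):
    """Combine the expected return with a tail-risk measure estimated from the
    quantile representation of the random return Z learnt by a distributional
    RL agent. quantiles: array of shape (n_actions, n_quantiles)."""
    expected = quantiles.mean(axis=1)                      # E[Z(s, a)]
    k = max(1, int(alpha * quantiles.shape[1]))
    cvar = np.sort(quantiles, axis=1)[:, :k].mean(axis=1)  # mean of the worst alpha-tail
    return (1.0 - lam) * expected + lam * cvar             # utility U(s, a)

# Greedy action selection with respect to U instead of the usual Q values.
z = np.random.randn(4, 32)  # fake quantile estimates for 4 actions
best_action = int(np.argmax(risk_based_utility(z, alpha=0.1, lam=0.5)))
```

Setting lam = 0 recovers the usual risk-neutral greedy rule, while increasing lam moves along the trade-off towards risk-averse behaviour, which mirrors the "span the complete trade-off" point made above.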
Deep learning models are being increasingly applied to imbalanced data in high stakes fields such as medicine, autonomous driving, and intelligence analysis. Imbalanced data compounds the black-box nature of deep networks because the relationships between classes may be highly skewed and unclear. This can reduce trust by model users and hamper the progress of developers of imbalanced learning algorithms. Existing methods that investigate imbalanced data complexity are geared toward binary classification, shallow learning models and low dimensional data. In addition, current eXplainable Artificial Intelligence (XAI) techniques mainly focus on converting opaque deep learning models into simpler models (e.g., decision trees) or mapping predictions for specific instances to inputs, instead of examining global data properties and complexities. Therefore, there is a need for a framework that is tailored to modern deep networks, that incorporates large, high dimensional, multi-class datasets, and uncovers data complexities commonly found in imbalanced data (e.g., class overlap, sub-concepts, and outlier instances). We propose a set of techniques that can be used by both deep learning model users to identify, visualize and understand class prototypes, sub-concepts and outlier instances; and by imbalanced learning algorithm developers to detect features and class exemplars that are key to model performance. Our framework also identifies instances that reside on the border of class decision boundaries, which can carry highly discriminative information. Unlike many existing XAI techniques which map model decisions to gray-scale pixel locations, we use saliency through back-propagation to identify and aggregate image color bands across entire classes. Our framework is publicly available at \url{https://github.com/dd1github/XAI_for_Imbalanced_Learning}
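A hedged sketch of the saliency-aggregation idea mentioned above: back-propagate the class score to the input and accumulate absolute gradients per colour band over all images of one class. Function names, the assumed (B, C, H, W) data layout, and the aggregation rule are illustrative assumptions, not the released framework's API.

```python
import torch

def class_saliency_summary(model, loader, target_class, device="cpu"):
    """Accumulate per-colour-channel saliency (absolute input gradients of the
    class score) over every image of `target_class` in `loader`."""
    model.eval().to(device)
    channel_saliency, n = None, 0
    for images, labels in loader:
        mask = labels == target_class
        if not mask.any():
            continue
        x = images[mask].to(device).requires_grad_(True)
        score = model(x)[:, target_class].sum()      # class score for the batch
        grads, = torch.autograd.grad(score, x)       # saliency via back-propagation
        sal = grads.abs().sum(dim=(0, 2, 3))         # aggregate over batch, H, W
        channel_saliency = sal if channel_saliency is None else channel_saliency + sal
        n += int(mask.sum())
    return channel_saliency / n                      # mean per-channel saliency
```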
A wide variety of model explanation approaches have been proposed in recent years, all guided by very different rationales and heuristics. In this paper, we take a new route and cast interpretability as a statistical inference problem. We propose a general deep probabilistic model designed to produce interpretable predictions. The model parameters can be learned via maximum likelihood, and the method can be adapted to any predictor network architecture and any type of prediction problem. Our method is a case of amortized interpretability models, where a neural network is used as a selector to allow for fast interpretation at inference time. Several popular interpretability methods are shown to be particular cases of regularised maximum likelihood for our general model. We propose new datasets with ground truth selection which allow for the evaluation of feature importance maps. Using these datasets, we show experimentally that using multiple imputation provides more reasonable interpretations.
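A minimal sketch, under assumptions, of an amortised interpretability model: a selector network produces per-feature selection probabilities, non-selected features are replaced by multiple imputation (here, naively resampled from other rows of the batch), and the fixed predictor is evaluated on the imputed inputs. A real training setup would use a continuous relaxation of the mask and the maximum-likelihood objective; this is not the paper's implementation.

```python
import torch
import torch.nn as nn

class AmortizedSelector(nn.Module):
    """Selector network that outputs per-feature selection probabilities."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.selector = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(), nn.Linear(hidden, n_features))

    def forward(self, x, predictor, n_imputations=5):
        probs = torch.sigmoid(self.selector(x))   # feature importance map
        mask = torch.bernoulli(probs)              # hard mask; training would relax this
        outputs = []
        for _ in range(n_imputations):
            idx = torch.randperm(x.size(0))
            imputed = mask * x + (1 - mask) * x[idx]   # impute dropped features from other rows
            outputs.append(predictor(imputed))
        return probs, torch.stack(outputs).mean(0)     # importance map, averaged prediction
```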
In this paper, we identify the best learning scenario to train a team of agents to compete against multiple possible strategies of opposing teams. We evaluate cooperative value-based methods in a mixed cooperative-competitive environment. We restrict ourselves to the case of a symmetric, partially observable, two-team Markov game. We selected three training methods based on the centralised training and decentralised execution (CTDE) paradigm: QMIX, MAVEN and QVMix. For each method, we considered three learning scenarios differentiated by the variety of team policies encountered during training. For our experiments, we modified the StarCraft Multi-Agent Challenge environment to create competitive environments where both teams could learn and compete simultaneously. Our results suggest that training against multiple evolving strategies achieves the best results when, for scoring their performances, teams are faced with several strategies.
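As a purely illustrative sketch of how the variety of opposing-team policies encountered during training could be controlled, one might sample opponents as below; the three scenario names and their definitions are assumptions for illustration, not the exact scenarios used in the paper.

```python
import random

def sample_opponent(scenario, heuristic_policy, learning_policy, policy_pool):
    """Return the opposing-team policy for the next training episode,
    depending on how much policy variety the scenario allows."""
    if scenario == "fixed":       # always the same scripted opponent
        return heuristic_policy
    if scenario == "self_play":   # a single, simultaneously learning opponent
        return learning_policy
    if scenario == "pool":        # several evolving strategies sampled over time
        return random.choice(policy_pool)
    raise ValueError(f"unknown scenario: {scenario}")
```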
Words of estimative probability (WEP) are expressions of a statement's plausibility (probably, maybe, likely, doubt, unlikely, impossible...). Multiple surveys demonstrate the agreement of human evaluators when assigning numerical probability levels to WEP. For example, highly likely corresponds to a median chance of 0.90 ± 0.08 in Fagen-Ulmschneider (2015)'s survey. In this work, we measure the ability of neural language processing models to capture the consensual probability level associated with each WEP. Firstly, we use the UNLI dataset (Chen et al., 2020) which associates premises and hypotheses with their perceived joint probability p, to construct prompts, e.g. "[PREMISE]. [WEP], [HYPOTHESIS]." and assess whether language models can predict whether the WEP consensual probability level is close to p. Secondly, we construct a dataset of WEP-based probabilistic reasoning, to test whether language models can reason with WEP compositions. When prompted "[EVENTA] is likely. [EVENTB] is impossible.", a causal language model should not express that [EVENTA&B] is likely. We show that both tasks are unsolved by off-the-shelf English language models, but that fine-tuning leads to transferable improvement.
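A small sketch of the first task's data construction, assuming a WEP-to-probability table in the spirit of the cited survey; the numeric levels, the tolerance, and the example sentences below are illustrative assumptions, not values from the paper.

```python
# Illustrative WEP -> consensual probability table (assumed values).
WEP_LEVELS = {"highly likely": 0.90, "likely": 0.70, "maybe": 0.50,
              "unlikely": 0.20, "impossible": 0.02}

def make_example(premise, hypothesis, wep, p, tol=0.15):
    """Build a '[PREMISE]. [WEP], [HYPOTHESIS].' prompt and a label saying
    whether the WEP's consensual level is close to the UNLI probability p."""
    prompt = f"{premise}. {wep.capitalize()}, {hypothesis}."
    label = abs(WEP_LEVELS[wep] - p) <= tol
    return prompt, label

prompt, label = make_example(
    premise="The streets are wet",
    hypothesis="it rained last night",
    wep="likely",
    p=0.75)
```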
Neural networks trained with ERM (empirical risk minimization) sometimes learn unintended decision rules, in particular when their training data is biased, i.e., when training labels are strongly correlated with undesirable features. To prevent a network from learning such features, recent methods augment training data such that examples displaying spurious correlations (i.e., bias-aligned examples) become a minority, whereas the other, bias-conflicting examples become prevalent. However, these approaches are sometimes difficult to train and scale to real-world data because they rely on generative models or disentangled representations. We propose an alternative based on mixup, a popular augmentation that creates convex combinations of training examples. Our method, coined SelecMix, applies mixup to contradicting pairs of examples, defined as showing either (i) the same label but dissimilar biased features, or (ii) different labels but similar biased features. Identifying such pairs requires comparing examples with respect to unknown biased features. For this, we utilize an auxiliary contrastive model with the popular heuristic that biased features are learned preferentially during training. Experiments on standard benchmarks demonstrate the effectiveness of the method, in particular when label noise complicates the identification of bias-conflicting examples.
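A hedged sketch of the pair-selection-plus-mixup idea for criterion (i): for each example, pick the same-label partner whose embedding under the auxiliary (biased) model is most dissimilar, then mix the pair. The selection details, the names, and the Beta mixing coefficient are assumptions for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def selecmix_batch(x, y, bias_embed, alpha=1.0):
    """Mix each example with a 'contradicting' partner: same label, most
    dissimilar embedding under an auxiliary model assumed to encode the bias.
    x: inputs (B, ...); y: labels (B,); bias_embed: (B, D) auxiliary features."""
    emb = F.normalize(bias_embed, dim=1)
    sim = emb @ emb.T                                   # cosine similarity matrix
    same_label = y.unsqueeze(0) == y.unsqueeze(1)
    sim = sim.masked_fill(~same_label, float("inf"))    # keep only same-label pairs
    partner = sim.argmin(dim=1)                         # least similar same-label example
    lam = torch.distributions.Beta(alpha, alpha).sample((x.size(0),)).to(x.device)
    lam = lam.view(-1, *([1] * (x.dim() - 1)))
    x_mix = lam * x + (1 - lam) * x[partner]            # convex combination (mixup)
    return x_mix, y                                     # label unchanged for criterion (i)
```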
Computing the Jacobian of the solution of an optimization problem is a central problem in machine learning, with applications in hyperparameter optimization, meta-learning, optimization as a layer, and dataset distillation, to name a few. Unrolled differentiation is a popular heuristic that approximates the solution with an iterative solver and differentiates it through the computational path. This work provides a non-asymptotic convergence-rate analysis of this approach on quadratic objectives for gradient descent and the Chebyshev method. We show that to ensure convergence of the Jacobian, we can either 1) choose a large learning rate leading to fast asymptotic convergence, but accept that the algorithm may have an arbitrarily long burn-in phase, or 2) choose a smaller learning rate leading to immediate but slower convergence. We refer to this phenomenon as the curse of unrolling. Finally, we discuss open problems relative to this approach, such as deriving a practical update rule for the optimal unrolling strategy, and making novel connections with the field of Sobolev orthogonal polynomials.
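A minimal sketch of unrolled differentiation on a quadratic objective, assuming a toy problem $f(w;\theta)=\tfrac12 w^\top A w - \theta\, b^\top w$ and a hand-picked step size: run a fixed number of gradient-descent steps and differentiate the last iterate with respect to $\theta$ through the unrolled computation. The matrix, step size, and step count are illustrative choices, not the paper's experimental setup.

```python
import torch

A = torch.tensor([[3.0, 0.0], [0.0, 1.0]])
b = torch.tensor([1.0, 1.0])

def unrolled_solution(theta, lr=0.1, steps=100):
    """Approximate argmin_w 0.5 * w^T A w - theta * b^T w with unrolled
    gradient-descent steps, keeping the computation graph so the result can
    be differentiated with respect to theta."""
    w = torch.zeros(2)
    for _ in range(steps):
        grad_f = A @ w - theta * b    # gradient of the quadratic in w
        w = w - lr * grad_f           # one unrolled GD step
    return w

# Jacobian of the approximate solution with respect to theta. Since the exact
# solution is w*(theta) = theta * A^{-1} b, this should approach
# A^{-1} b = [1/3, 1] as the number of unrolled steps grows.
jac = torch.autograd.functional.jacobian(unrolled_solution, torch.tensor(1.0))
print(jac)
```

Varying the step size in this toy example illustrates the trade-off stated above: a larger (still stable) learning rate improves the asymptotic rate of the Jacobian, while smaller rates converge immediately but more slowly.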